



posted by hubie on Saturday May 17, @07:39PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

That is one of the prevailing messages dished out by the cyber arm of the British intelligence squad at GCHQ's National Cyber Security Centre (NCSC) in recent years at its annual conference. The cyber agency's CTO, Ollie Whitehouse, first pitched the idea during a keynote at last year's event, and once again it was a primary talking point of this week's CYBERUK, but not one that went down well with everyone.

Whitehouse said this week that "the market does not currently support and reward those companies that make that investment and build secure products." The risks introduced here are then shouldered by customers – companies, governments – rather than the vendors themselves.

"So, we have a non-functional market," he added.

"When we need to build an ecosystem that's capable of meeting this modern threat, we have to find ways where we can incentivize those vendors to be rewarded for their hard work, for those that go the extra mile, for those that build the secure technologies which our foundations are going to rely on in the future.

"Those that build secure technology make prosperous companies. They make celebrated companies, and they make successful companies ultimately. Because without that, nothing changes, and we repeat the last 40 years."

That's the NCSC's line – one that will most likely resonate with any organization popped by one of the myriad decades-old vulns vendors can't seem to stamp out. 

But there is a disconnect between the agency's message and the views of major players elsewhere in the industry. What was first pitched as a necessary play for a more cyber-secure ecosystem has, given the agency's steadfast stance on the matter, become a question of whether or not to intervene.

[...] McKenzie's take was that customers will ultimately drive vendor change. If they start prioritizing security, that's what vendors will give them. A string of cockups will quickly out those who don't provide value, and then it becomes a case of having to improve to survive.

He said: "I think there are only some products where I think maybe, you know, they're a little bit smoke and mirrors, but I think that's rare, and then it quickly becomes known in the market that they don't work. So, I don't agree. I think there's absolutely a market, and there is a return on investment for security and resilience."

Likewise, Walsh highlighted that cybersecurity failures are costly for organizations, alluding to the fact that victims of security snafus will certainly consider the ROI when deciding to renew, or not renew, certain vendor contracts.

Aung downplayed the idea of the need for improved incentives too, saying "there are certainly organizations out there who are cutting corners knowingly and putting their customers at risk knowingly. But, I think the vast majority are just grappling with [various external factors] and in an arms race at the same time. So I think it's a complex picture."

[...] At last year's CYBERUK, Whitehouse put forth the idea of perhaps punishing vendors that fall short of expectations, not just incentivizing them to do better. The idea was put on the table again this week, with his industry peers once more siding against the CTO's stance.

McKenzie said "he's not a fan" of the idea. In his view, it goes back to customers eventually abandoning sub-par vendors and, when speaking to The Register, he pointed to historical events that illustrate how the market itself will drive change.

"What we need is we need purchasers of security to prioritize the features and functionalities they want and then incentivize those organizations.

"If you look at someone like CrowdStrike or Microsoft Defender, they did really well in that endpoint marketplace because they provided the most features. There are other things that weren't as good. They don't grow."

With the shift from antivirus to EDR, vendors that offer the best will perform the best, he argued. 

[...] Parallels can be drawn with the automotive industry. The European NCAP program was introduced in the late 1990s, providing customers an easy way to understand how different manufacturers were performing on safety.

Before that, we had the likes of Volvo scooping up swathes of market share off the back of its reputation for producing safe cars, or German and Japanese brands for their reliability.

Perhaps the same principles could apply to security vendors, all vying for stellar, market-shifting trustworthiness. And then it goes back to purchasers dictating which security vendors end up doing well.

[...] Whitehouse said: "Some of you would have heard me say that... we know more that's in our sausages than our software, and that's probably not right for 2025, so the food labelling standards are coming to software soon. You heard it here first."


Original Submission

posted by hubie on Saturday May 17, @02:50PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The European Vulnerability Database (EUVD) is now fully operational, offering a streamlined platform to monitor critical and actively exploited security flaws as the US struggles with budget cuts, delayed disclosures, and confusion around the future of its own tracking systems.

As of Tuesday, the full-fledged version of the website is up and running.

"The EU is now equipped with an essential tool designed to substantially improve the management of vulnerabilities and the risks associated with it," ENISA Executive Director Juhan Lepassaar said in a statement announcing the EUVD. 

"The database ensures transparency to all users of the affected ICT products and services and will stand as an efficient source of information to find mitigation measures," Lepassaar continued.

The European Union Agency for Cybersecurity (ENISA) first announced the project in June 2024 under a mandate from the EU's Network and Information Security 2 Directive, and quietly rolled out a limited-access beta version last month during a period of uncertainty surrounding the United States' Common Vulnerabilities and Exposures (CVE) program.

More broadly, Uncle Sam has been hard at work slashing CISA and other cybersecurity funding while key federal employees responsible for the US government's secure-by-design program have jumped ship.

Plus, on Monday, CISA said it would no longer publish routine alerts - including those detailing exploited vulnerabilities - on its public website. Instead, these updates will be delivered via email, RSS feeds, and the agency's account on X.

With all this, a cybersecurity professional could be forgiven for doubting the US government's commitment to hardening networks and rooting out vulnerabilities.

Enter the EUVD. The EUVD is similar to the US government's National Vulnerability Database (NVD) in that it identifies each disclosed bug (with both a CVE-assigned ID and its own EUVD identifier), notes the vulnerability's criticality and exploitation status, and links to available advisories and patches.
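
As a loose illustration of that record structure (field names and values here are hypothetical, not ENISA's actual schema or API), an entry carrying both identifiers plus criticality and exploitation status might be modelled like this:

    from dataclasses import dataclass, field

    @dataclass
    class VulnerabilityRecord:
        """Hypothetical shape of an EUVD-style entry; not the real schema."""
        euvd_id: str                   # EUVD's own identifier
        cve_id: str | None             # CVE-assigned ID, when one exists
        description: str
        criticality: float             # e.g. a CVSS base score
        actively_exploited: bool
        advisories: list[str] = field(default_factory=list)  # links to advisories and patches

    # Placeholder values only, for illustration
    record = VulnerabilityRecord(
        euvd_id="EUVD-2025-0001",
        cve_id="CVE-2025-0000",
        description="example memory-corruption flaw",
        criticality=9.8,
        actively_exploited=True,
        advisories=["https://vendor.example/advisory"],
    )
    print(record.euvd_id, record.cve_id, "exploited" if record.actively_exploited else "not exploited")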

Unlike the NVD, which is still struggling with a backlog of vulnerability submissions and is not very easy to navigate, the EUVD is updated in near real-time and highlights both critical and exploited vulnerabilities at the top of the site.

The EUVD provides three dashboard views: one for critical vulnerabilities, one for those actively exploited, and one for those coordinated by members of the EU CSIRTs network.

Information is sourced from open-source databases as well as advisories and alerts issued by national CSIRTs, mitigation and patching guidelines published by vendors, and exploited vulnerability details.

ENISA is also a CVE Numbering Authority (CNA), meaning it can assign CVE identifiers and coordinate vulnerability disclosures under the CVE program. Even as an active CNA, however, ENISA seems to be in the dark about what's next for the embattled US-government-funded CVE program, which is only under contract with MITRE until next March.

The launch announcement notes that "ENISA is in contact with MITRE to understand the impact and next steps following the announcement on the funding to the Common Vulnerabilities and Exposures Program."


Original Submission

posted by hubie on Saturday May 17, @10:05AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

On Monday, the US Court of Appeals for the Federal Circuit said scientists Jennifer Doudna and Emmanuelle Charpentier will get another chance to show they ought to own the key patents on what many consider the defining biotechnology invention of the 21st century.

The pair shared a 2020 Nobel Prize for developing the versatile gene-editing system, which is already being used to treat various genetic disorders, including sickle cell disease

But when key US patent rights were granted in 2014 to researcher Feng Zhang of the Broad Institute of MIT and Harvard, the decision set off a bitter dispute in which hundreds of millions of dollars—as well as scientific bragging rights—are at stake.

[...] The CRISPR patent battle is among the most byzantine ever, putting the technology alongside the steam engine, the telephone, the lightbulb, and the laser among the most hotly contested inventions in history.

In 2012, Doudna and Charpentier were first to publish a description of a CRISPR gene editor that could be programmed to precisely cut DNA in a test tube. There’s no dispute about that.

However, the patent fight relates to the use of CRISPR to edit inside animal cells—like those of human beings. That’s considered a distinct invention, and one both sides say they were first to come up with that very same year. 

In patent law, this moment is known as conception—the instant a lightbulb appears over an inventor’s head, revealing a definite and workable plan for how an invention is going to function.

In 2022, a specialized body called the Patent Trial and Appeal Board, or PTAB, decided that Doudna and Charpentier hadn’t fully conceived the invention because they initially encountered trouble getting their editor to work in fish and other species. Indeed, they had so much trouble that Zhang scooped them with a 2013 publication demonstrating he could use CRISPR to edit human cells.

The Nobelists appealed the finding, and yesterday the appeals court vacated it, saying the patent board applied the wrong standard and needs to reconsider the case. 

According to the court, Doudna and Charpentier didn’t have to “know their invention would work” to get credit for conceiving it. What could matter more, the court said, is that it actually did work in the end. 

[...] The decision is likely to reopen the investigation into what was written in 13-year-old lab notebooks and whether Zhang based his research, in part, on what he learned from Doudna and Charpentier’s publications. 

The case will now return to the patent board for a further look, although Sherkow says the court finding can also be appealed directly to the US Supreme Court.


Original Submission

posted by hubie on Saturday May 17, @05:15AM   Printer-friendly

https://www.bleepingcomputer.com/news/security/bluetooth-61-enhances-privacy-with-randomized-rpa-timing/

By Bill Toulas (May 11, 2025)

The Bluetooth Special Interest Group (SIG) has announced Bluetooth Core Specification 6.1, bringing important improvements to the popular wireless communication protocol.

One new feature highlighted in the latest release is increased device privacy via randomized Resolvable Private Address (RPA) updates.

"Randomizing the timing of address changes makes it much more difficult for third parties to track or correlate device activity over time," reads SIG's announcement.

A Resolvable Private Address (RPA) is a Bluetooth address created to look random and is used in place of a device's fixed MAC address to protect user privacy. It allows trusted devices to securely reconnect without revealing their true identity.

[...] The Controller picks a random value in the defined range using a NIST-approved random number generator, and updates the RPA. This makes tracking significantly harder, as there is no pattern in the value selection.
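
As a rough sketch of that idea (not the actual Controller implementation, and with an assumed 8-to-15-minute rotation window rather than values taken from the 6.1 specification), the snippet below draws each rotation interval from a cryptographically secure random source so that address changes follow no predictable pattern:

    import secrets

    # Assumed example window for RPA rotation; the real bounds come from the
    # host/Controller configuration, not from this sketch.
    MIN_ROTATION_S = 8 * 60
    MAX_ROTATION_S = 15 * 60

    def new_rpa() -> bytes:
        """Stand-in for generating a fresh 48-bit Resolvable Private Address."""
        return secrets.token_bytes(6)

    def schedule_rotations(cycles: int = 3) -> None:
        for _ in range(cycles):
            # Pick the next interval uniformly at random within the window, so an
            # observer cannot correlate sightings of the device by timing alone.
            interval = MIN_ROTATION_S + secrets.randbelow(MAX_ROTATION_S - MIN_ROTATION_S + 1)
            address = new_rpa()
            print(f"advertise with RPA {address.hex(':')} for {interval} s, then rotate")
            # A real Controller would wait `interval` seconds here before
            # autonomously generating the next address.

    if __name__ == "__main__":
        schedule_rotations()

Note that a real RPA is derived from the device's Identity Resolving Key rather than from plain random bytes; the stand-in above only illustrates the timing randomization.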

More details about how the new privacy feature works can be found in the specification document published along with the announcement.

Another feature highlighted in the announcement is better power efficiency starting from Bluetooth 6.1, which stems from allowing the chip (Controller) to autonomously handle the randomized RPA updates.

[...] While Bluetooth 6.1 has made exciting steps forward, it's important to underline that actual support in hardware and firmware may take years to arrive.

The first wave of chips with Bluetooth 6.1 should not be realistically expected before 2026, and even then, early implementations may not immediately expose all the newly available features, as testing and validation may be required.


Original Submission

posted by hubie on Saturday May 17, @12:31AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Looks like inflated GPU prices are here to stay

A new report claims that Nvidia has recently raised the official prices of nearly all of its products to combat the impact of tariffs and surging manufacturing costs on its business, with gaming graphics cards receiving a 5 to 10% hike while AI GPUs see up to a 15% increase.

As reported by Digitimes Taiwan (translated), Nvidia is facing "multiple crises," including a $5.5 billion hit to its quarterly earnings over export restrictions on AI chips, including a ban on sales of its H20 chips to China.

Digitimes reports that CEO Jensen Huang has been "shuttling back and forth" between the US and China to minimize the impact of tariffs, and that "in order to maintain stable profitability," Nvidia has reportedly recently raised official prices for almost all its products, allowing its partners to increase prices accordingly.

Despite the hikes, Digitimes claims Nvidia's financial report at the end of the month "should be within financial forecasts and deliver excellent profit results," driven by strong demand for AI chips outside of China and the expanding spending from cloud service providers.

The report states that Nvidia has applied official price hikes to numerous products to keep its earnings stable, with partners following suit. As an example, Digitimes cites the RTX 5090, which buyers snapped up at premium prices upon release without hesitation, such that channel pricing "quickly doubled."

The report notes that following the AI chip ban, RTX 5090 prices climbed further still, surging overnight from around NT$90,000 to NT$100,000, with other RTX 50 series cards also increasing by 5-10%. Digitimes notes Nvidia has also raised the price of its H200 and B200 chips, with server vendors increasing prices by up to 15% accordingly.

According to the publication's supply chain sources, price hikes have been exacerbated by the shift of Blackwell chip production to TSMC's US plant, which has driven a significant rise in the price of production, materials, and logistics.


Original Submission

posted by janrinok on Friday May 16, @07:45PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

One of the ultimate goals of medieval alchemy has been realized, but only for a fraction of a second. Scientists with the European Organization for Nuclear Research, better known as CERN, were able to convert lead into gold using the Large Hadron Collider (LHC), the world's most powerful particle accelerator. Unlike the examples of transmutation we see in pop culture, these LHC experiments involve smashing subatomic particles together at ridiculously high speeds, altering the composition of lead nuclei so that they briefly become gold.

The LHC is often used to smash lead ions together to create extremely hot and dense matter similar to what was observed in the universe following the Big Bang. While conducting this analysis, the CERN scientists took note of the near-misses that caused a lead nucleus to drop some of its neutrons or protons. Lead atoms have only three more protons than gold atoms, meaning that in certain cases the LHC causes a lead atom to drop just enough protons to become a gold atom for a fraction of a second, before it immediately fragments into a bunch of particles.

Alchemists back in the day may be astonished by this achievement, but the experiments conducted between 2015 and 2018 only produced about 29 picograms of gold, according to CERN. The organization added that the latest trials produced almost double that amount thanks to regular upgrades to the LHC, but the mass made is still trillions of times less than what's necessary for a piece of jewelry. Instead of trying to chase riches, the organization's scientists are more interested in studying the interaction that leads to this transmutation.

"It is impressive to see that our detectors can handle head-on collisions producing thousands of particles, while also being sensitive to collisions where only a few particles are produced at a time, enabling the study of electromagnetic 'nuclear transmutation' processes," Marco Van Leeuwen, spokesperson for the A Large Ion Collider Experiment project at the LHC, said in a statement.


Original Submission

Processed by drussell

posted by janrinok on Friday May 16, @03:01PM   Printer-friendly
from the Or-how-MBA-culture-killed-Bell-Labs dept.

canopic jug writes:

The 1517 Fund has an article exploring why Bell Labs worked so well, and what is lacking in today's society to recreate such a research environment:

There have been non-profit and corporate giants with larger war chests than Ma Bell. AT&T started Bell Labs when its revenue was under $13 B (current USD). During the great depression, when Mervin Kelly laid the foundation for the lab, AT&T's revenue was $22 B (current USD).

Inflation adjusted, Google has made more than AT&T did at Bell Labs' start since 2006. Microsoft, 1996. Apple, 1992.

Each has invested in research. None have a Bell Labs.

Academia's worse. Scientists at the height of their careers spend more time writing grants than doing research. Between 1975 and 2005, the amount of time scientists at top-tier universities spent on research declined by 20%. Time spent on paperwork increased by 100%. To quote the study, researchers "experienced secular decline in research time, on the order of 10h per week."

[...] Reportedly, Kelly and others would hand people problems and then check in a few years later. Most founders and executives I know balk at this idea. After all, "what's stopping someone from just slacking off?" Kelly would contend that's the wrong question to ask. The right question is, "Why would you expect information theory from someone who needs a babysitter?"

Micromanagement and quantification also take their toll.

Previously:
(2024) The Incredible Story Behind the First Transistor Radio
(2024) Is It Possible to Recreate Bell Labs?
(2022) Unix History: A Mighty Origin Story
(2019) Vintage Computer Federation East 2019 -- Brian Kernighan Interviews Ken Thompson
(2017) US Companies are Investing Less in Science


Original Submission

Processed by kolie

posted by hubie on Friday May 16, @10:20AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

In 2018, about 13.5 percent of the more than 2.6 million deaths from cardiovascular disease among people ages 55 to 64 globally could have been related to exposure to a type of chemical called a phthalate, researchers report April 28 in eBioMedicine.

Phthalates are a group of chemicals found in shampoos, lotions, food packaging and medical supplies including blood bags. The chemicals are often added to plastics to make them softer and more flexible.

Phthalates can enter the body when you consume contaminated food, breathe them in or absorb them through the skin. Once inside, they act as endocrine disruptors, which means they affect hormones. Previous research has also linked the chemicals to diabetes, obesity, pregnancy complications and heart disease.

The new study looked at the effects of one particular phthalate, known as di-2-ethylhexylphthalate, or DEHP, which is often added to PVC plastics to soften them. Sara Hyman, a research scientist at NYU Langone Health, and colleagues focused on the relationship between DEHP exposure levels and cardiovascular disease, the leading cause of death worldwide. Hyman and colleagues compared estimated DEHP exposure in 2008 with death rates from cardiovascular disease ten years later in different parts of the world. By studying how the two changed together, they determined what portion of those deaths might be attributable to phthalates.

More than 350,000 excess deaths worldwide were associated with DEHP exposure in 2018, the team found. About three-quarters of those occurred in the Middle East, South Asia, East Asia and the Pacific. This disparity might be due to the regions’ growing plastics industries, the researchers suggest. The new work does not show that DEHP exposure directly causes heart disease, though — only that there’s an association between the two.

[...] The findings offer yet another reason to decrease plastic use, researchers say. “We’re going to become the plastic planet,” Zhou says. “We need to start to really address this serious issue.”

S. Hyman et al. Phthalate exposure from plastics and cardiovascular disease: global estimates of attributable mortality and years life lost. eBioMedicine, 105730. Published online April 28, 2025. doi: 10.1016/j.ebiom.2025.105730.


Original Submission

posted by hubie on Friday May 16, @05:32AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The Federal Trade Commission has delayed the start of a rule that aims to make the process of canceling subscriptions less of a nightmare. Last year, the FTC voted to ratify amendments to a regulation known as the Negative Option Rule, adding a new "click-to-cancel" rule that requires companies to be upfront about the terms of subscription signups and prohibits them "from making it any more difficult for consumers to cancel than it was to sign up." Surprising no one, telecom companies were not happy, and sued the FTC. While the rule was nevertheless set to be implemented on May 14, the FTC now says enforcement has been pushed back 60 days to July 14.

Some parts of the updated Negative Option Rule went into effect on January 19, but enforcement of certain provisions was deferred to May 14 by the previous administration to give companies more time to comply. Under the new administration, the FTC says it has "conducted a fresh assessment of the burdens that forcing compliance by this date would impose" and decided it "insufficiently accounted for the complexity of compliance."

Once the July 14 deadline hits, the FTC says "regulated entities must be in compliance with the whole of the Rule because the Commission will begin enforcing it." But, the statement adds, "if that enforcement experience exposes problems with the Rule, the Commission is open to amending" it.

Previously:
    • Judge Rules SiriusXM's Annoying Cancellation Process is Illegal
    • The US Government Wants to Make It Easier for You to Click the 'Unsubscribe' Button
    • Clingy Virgin Media Won't Let Us Go, Customers Complain
    • Publishers and Advertisers Push Back at FTC's 'Click-to-Cancel' Proposal
    • The End of "Click to Subscribe, Call to Cancel"? - News Industry's Favorite Retention Tactic


Original Submission

posted by hubie on Friday May 16, @12:45AM   Printer-friendly

Research out of the University of Connecticut proposes neural resonance theory, which says that neurons physically synchronize with music, creating stable patterns that affect our entire body.

In a nutshell
       • Brain-music synchronization: Your brain doesn't just predict music patterns—it physically synchronizes with them through neural oscillations that affect your entire body.
        • Stability creates preference: Musical sounds with simple frequency relationships (like perfect fifths) create more stable neural patterns, explaining why certain combinations sound pleasant across cultures.
        • Cultural attunement: While some aspects of music perception are universal, your brain becomes "attuned" to the music you frequently hear, explaining cultural preferences while maintaining recognition of basic musical structures.

What is Neural Resonance Theory?

Neural Resonance Theory (NRT) is a scientific approach that explains how your brain processes music using fundamental physics principles rather than abstract predictions.

In simpler terms, NRT suggests that:

    • Your brain contains billions of neurons that naturally oscillate (rhythmically fire) at different frequencies
    • When you hear music, these neural oscillations physically synchronize with the sound waves
    • This synchronization creates stable patterns in your brain that correspond to musical elements
    • The more stable these patterns are, the more pleasant or "right" the music feels

Unlike traditional theories that say your brain is constantly making predictions about what comes next in music, NRT proposes that your brain actually embodies the music's structure through its own physical patterns.

This physical synchronization explains why music can directly affect your movements and emotions without conscious thought—your brain and body are literally vibrating in harmony with the music.
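
To make "physical synchronization" concrete, here is a toy model (a single driven phase oscillator, not the full neurodynamic model from the paper): when the stimulus frequency is close to the oscillator's natural frequency, the phase difference stops drifting and locks, which is the kind of stability NRT associates with musical patterns.

    import math

    def entrainment(natural_hz: float, stimulus_hz: float,
                    coupling: float = 2.0, seconds: float = 30.0,
                    dt: float = 0.001) -> float:
        """Average drift (rad/s) of the phase difference between a driven
        oscillator and a periodic stimulus; near zero means phase locking."""
        theta = 0.0                       # oscillator phase (rad)
        phi = 0.0                         # stimulus phase (rad)
        omega = 2 * math.pi * natural_hz
        omega_s = 2 * math.pi * stimulus_hz
        for _ in range(int(seconds / dt)):
            # Kuramoto-style update: the stimulus pulls the oscillator's phase.
            theta += (omega + coupling * math.sin(phi - theta)) * dt
            phi += omega_s * dt
        return (theta - phi) / seconds

    # A 2 Hz oscillator locks to a 2.1 Hz beat (near-zero drift) but not to an
    # unrelated 3.7 Hz stimulus (drift close to the frequency mismatch).
    print("close frequency, drift:", round(entrainment(2.0, 2.1), 3))
    print("distant frequency, drift:", round(entrainment(2.0, 3.7), 3))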

Read the rest of the article: https://studyfinds.org/brain-cells-synchronize-to-music/

Journal Reference: Harding, E.E., Kim, J.C., Demos, A.P. et al. Musical neurodynamics. Nat. Rev. Neurosci. 26, 293–307 (2025). https://doi.org/10.1038/s41583-025-00915-4


Original Submission

posted by janrinok on Thursday May 15, @08:00PM   Printer-friendly
from the ai-job-watch dept.

Arthur T Knackerbracket has processed the following story:

Just two years ago, prompt engineering was the talk of the tech world – a seemingly essential new job born from the rapid rise of artificial intelligence. Companies were eager to hire specialists who could craft the right questions for large language models, ensuring optimal AI performance. The role was accessible, required little technical background, and was seen as a promising entry point into a booming industry.

Today, however, prompt engineering as a standalone role has all but disappeared. What was once a highly touted skill set is now simply expected of anyone working with AI. In an ironic twist, some companies are even using AI to generate prompts for their own AI systems, further diminishing the need for human prompt engineers.

The brief rise and rapid fall of prompt engineering highlights a broader truth about the AI job market: new roles can vanish as quickly as they appear. "AI is already eating its own," says Malcolm Frank, CEO of TalentGenius, in an interview with Fast Company.

"Prompt engineering has become something that's embedded in almost every role, and people know how to do it. Also, now AI can help you write the perfect prompts that you need. It's turned from a job into a task very, very quickly."

The initial appeal of prompt engineering was its low barrier to entry. Unlike many tech roles, it didn't require years of specialized education or coding experience, making it especially attractive to job seekers hoping to break into AI. In 2023, LinkedIn profiles were filled with self-described prompt engineers, and the North American market for prompt engineering was valued at $75.5 million, growing at a rate of 32.8 percent annually.

Yet the hype outpaced reality. According to Allison Shrivastava, an economist at the Indeed Hiring Lab, prompt engineering was rarely listed as an official job title. Instead, it has typically been folded into roles like machine learning engineer or automation architect. "I'm not seeing it as a standalone job title," she added.

As the hype fades, the AI job market is shifting toward roles that require deeper technical expertise. The distinction is clear: while prompt engineers focused on crafting queries for LLMs, machine learning engineers are the ones building and improving those models.

Lerner notes that demand for mock interviews for machine learning engineers has surged, increasing more than threefold in just two months. "The future is working on the LLM itself and continuing to make it better and better, rather than needing somebody to interpret it," she says.

This shift is also evident in hiring trends. Shrivastava points out that while demand for general developers is declining, demand for engineering roles overall is rising. For those without a coding background, options are narrowing.

Founding a company or moving into management consulting, where expertise in AI implementation is increasingly valued, may be the best routes forward. As of February, consulting positions made up 12.4% of AI job titles on Indeed, signaling a boom in advisory roles as organizations seek to integrate AI into their operations.

Tim Tully, a partner at Menlo Ventures, has seen firsthand how AI is changing the nature of work, not necessarily by creating new jobs, but by reshaping existing ones. "I wouldn't say that [there are] new jobs, necessarily; it's more so that it's changing how people work," Tully says. "You're using AI all the time now, whether you like it or not, and it's accelerating what you do."


Original Submission

Processed by kolie

posted by hubie on Thursday May 15, @03:16PM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

This engineering marvel necessitated custom userspace GPU drivers and probably a patched adapter firmware as well.

External GPU (eGPU) support on Apple Silicon Macs and MacBooks has been a persistent pain point for AI/ML developers. Through what some may consider to be black magic, Tiny Corp has managed to get an AMD eGPU working in Tiny Grad over USB3, a standard that inherently lacks PCIe capabilities. As they're using libusb, this functionality extends to Windows, Linux, and even macOS, including devices with Apple Silicon.

Traditionally, GPUs are connected through PCIe slots or the Thunderbolt/USB4 interfaces, which offer PCI Express tunneling support. As such, external GPU solutions rely on the aforementioned interfaces, which limits their support for older systems and laptops. Unlike Intel-based Macs/MacBooks, Apple Silicon based devices do not support external GPUs, mainly due to the lack of driver support and architectural differences. So, despite their efficiency compared to traditional x86-based systems, users have reported challenges in AI workloads, especially when it comes to prompt processing.

Requirements for running an eGPU through a USB3 interface at this time include the use of an ASM2464PD-based adapter and an AMD GPU. For its tests, Tiny Corp used the ADT-UT3G adapter, which uses the same ASM2464PD chip, but out of the box, it only works with Thunderbolt 3, Thunderbolt 4, or USB 4 interfaces. The team likely employed a custom firmware to enable USB3 mode that works without direct PCIe communication. Technical details are murky; however, the controller appears to be translating PCIe commands to USB packets and vice versa.
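
Because those details are murky, the snippet below is only a guess at the general shape of such a user-space approach: it uses pyusb (a Python wrapper around libusb, which the project is said to rely on) to open a USB3 device and push an opaque command packet over a bulk endpoint, the way PCIe-style transactions might be wrapped for the adapter to translate. The vendor/product IDs, endpoint addresses, and packet layout are all placeholders, not the ASM2464PD's real protocol.

    import struct
    import usb.core  # pyusb, built on libusb

    # Placeholder IDs and endpoints; a real adapter enumerates with its own values.
    VENDOR_ID, PRODUCT_ID = 0x0000, 0x0000
    BULK_OUT, BULK_IN = 0x01, 0x81

    def pcie_like_read(dev, address: int, length: int) -> bytes:
        """Wrap a hypothetical 'read' transaction in a pair of USB bulk transfers."""
        # Hypothetical packet layout: opcode (1 = read), 64-bit address, 32-bit length.
        request = struct.pack("<BQI", 1, address, length)
        dev.write(BULK_OUT, request, timeout=1000)
        return bytes(dev.read(BULK_IN, length, timeout=1000))

    dev = usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID)
    if dev is None:
        raise SystemExit("adapter not found (placeholder IDs; adjust for real hardware)")
    dev.set_configuration()
    print(pcie_like_read(dev, 0x0, 64).hex())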

The solution is quite hacky, as it bypasses kernel-level GPU drivers, requires specific hardware, and uses USB3, which was not originally intended for GPU communication. It essentially offloads the computation (GPU kernel executions) from your system to the eGPU. The constraint here is that data transfer speeds are capped at 10 Gbps due to the USB3 standard used, so loading models into the GPU will take much longer than if you were to use a standard PCIe connection.

Since it uses custom user-space drivers to avoid tinkering with the kernel, the feature is limited to AMD's RDNA 3/4 GPUs, although there's a hint of potential RDNA 2 support in the future. USB3 eGPU functionality has been upstreamed to Tiny Grad's master branch, so if you have an AMD GPU and a supported adapter, feel free to try it out. We can expect Tiny Corp to provide a more detailed and technical breakdown once its developers are done tidying up the code.


Original Submission

posted by hubie on Thursday May 15, @10:30AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

Lazarus 4 is the latest version of the all-FOSS but Delphi-compatible IDE for the FreePascal compiler.

The IDE is developed independently of the underlying Pascal language that it's written in, so this doesn't mean a whole new version of FreePascal: Lazarus 4 was built with FreePascal 3.2.2, which was released in 2021. It replaces Lazarus 3.8, which was the current version when we talked about it for Delphi's 30th anniversary back in February.

[...] It's a multi-platform IDE, and the Sourceforge page has packages for both 32-bit and 64-bit Windows, Linux, and FreeBSD. On Apple hardware, it offers PowerPC, x86 and Arm64 versions; Cocoa development needs macOS 12 or higher, but using the older Carbon APIs it supports OS X 10.5 to 10.14. There's also a Raspberry Pi version for the Pi 4 and later. It supports a wide variety of toolkits for GUI programming, as the project wiki shows: Win32, Gtk2 and work-in-progress Gtk3, and Qt versions 4, 5 and 6, among others.

One criticism we've seen of the FreePascal project in general concerns its documentation, although there is quite a lot of it: eight FPC manuals, and lengthy Lazarus docs in multiple languages. There is a paid-for tutorial e-book available, too.

Something which might help newcomers to the language here is a new e-book: FreePascal From Square One by Jeff Duntemann. The author says:

It's a distillation of the four editions of my Pascal tutorial, Complete Turbo Pascal, which first appeared in 1985 and culminated in Borland Pascal 7 From Square One in 1993. I sold a lot of those books and made plenty of money, so I'm now giving it away, in hopes of drawing more people into the Pascal universe.

[...] There are other free resources out there, such as this course in Modern Pascal. The more, the merrier, though.

Pascal isn't cool or trendy any more, but even so, it remains in the top ten on the TIOBE index. Perhaps these new releases will help it to rise up the ratings a little more.


Original Submission

posted by hubie on Thursday May 15, @05:47AM   Printer-friendly

Arthur T Knackerbracket has processed the following story:

The head of the US Copyright Office has reportedly been fired, the day after the agency concluded that AI model builders' use of copyrighted material went beyond existing doctrines of fair use.

The office’s opinion on fair use came in a draft of the third part of its report on copyright and artificial intelligence. The first part considered digital replicas and the second tackled whether it is possible to copyright the output of generative AI.

The office published the draft [PDF] of Part 3, which addresses the use of copyrighted works in the development of generative AI systems, on May 9th.

The draft notes that generative AI systems “draw on massive troves of data, including copyrighted works” and asks: “Do any of the acts involved require the copyright owners’ consent or compensation?”

That question is the subject of several lawsuits, because developers of AI models have admitted to training their products on content scraped from the internet and other sources without compensating content creators or copyright owners. AI companies have argued fair use provisions of copyright law mean they did no wrong.

As the report notes, one test courts use to determine fair use considers “the effect of the use upon the potential market for or value of the copyrighted work”. If a judge finds an AI company’s use of copyrighted material doesn’t impact a market or value, fair use will apply.

The report finds AI companies can’t sustain a fair use defense in the following circumstances:

When a model is deployed for purposes such as analysis or research… the outputs are unlikely to substitute for expressive works used in training. But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.

The office will soon publish a final version of Part 3 that it expects will emerge “without any substantive changes expected in the analysis or conclusions.”

Tech law professor Blake E. Reid described the report as “very bad news for the AI companies in litigation” and “A straight-ticket loss for the AI companies”.

Among the AI companies currently in litigation on copyright matters are Google, Meta, OpenAI, and Microsoft. All four made donations to Donald Trump’s inauguration fund.

Reid’s post also pondered the timing of the Part 3 report – despite the office saying it was released “in response to congressional inquiries and expressions of interest from stakeholders” – and wrote “I continue to wonder (speculatively!) if a purge at the Copyright Office is incoming and they felt the need to rush this out.”

Reid looks prescient as the Trump administration reportedly fired the head of the Copyright Office, Shira Perlmutter, on Saturday.

Representative Joe Morelle (D-NY) wrote that the termination was “…surely no coincidence he acted less than a day after she refused to rubber-stamp Elon Musk’s efforts to mine troves of copyrighted works to train AI models.”

[...] There’s another possible explanation for Perlmutter’s ousting: The Copyright Office is a department of the Library of Congress, whose leader was last week fired on grounds of “quite concerning things that she had done … in the pursuit of DEI [diversity, equity, and inclusion] and putting inappropriate books in the library for children," according to White House press secretary Karoline Leavitt.

So maybe this is just the Trump administration enacting its policy on diversity without regard to the report’s possible impact on donors or Elon Musk.


Original Submission

posted by janrinok on Thursday May 15, @01:01AM   Printer-friendly
from the fishing-for-fissiles dept.

Arthur T Knackerbracket has processed the following story:

Chinese researchers have developed an extremely energy efficient and low-cost technology for extracting uranium from seawater, a potential boon to the country’s nuclear power ambitions. China currently leads the world in building new nuclear power plants, and shoring up its supply of uranium will help these efforts.

The world’s oceans hold an estimated 4.5 billion tonnes of uranium – more than 1000 times that available to mining – but it is extremely dilute. Previous experimental efforts have harvested uranium from seawater by physically soaking it up with artificial sponges or a polymer material inspired by blood vessel patterns, or by the more efficient and more expensive electrochemical method of trapping uranium atoms with electric fields.

The new approach was able to extract 100 per cent of the uranium atoms from a salty seawater-like solution within 40 minutes. By comparison, some physical adsorption methods extract less than 10 per cent of the available uranium.

The system is “very innovative” and “a significant step forward compared to… existing uranium extraction methods”, says Shengqian Ma at the University of North Texas, who wasn’t involved in the new research.

[...]

When tested with small amounts of natural seawater – about 1 litre running through the system at any time – the new method was able to extract 100 per cent of uranium from East China Sea water and 85 per cent from South China Sea water. In the latter case, the researchers also achieved 100 per cent extraction with larger electrodes.

The experiments also showed the energy required was more than 1000-fold less than other electrochemical methods. The whole process cost about $83 per kilogram of extracted uranium, less than half the roughly $205 per kilogram of physical adsorption methods and less than a quarter of the $360 per kilogram of previous electrochemical methods.
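
A quick back-of-the-envelope check of those reported figures (plain ratios computed from the article's numbers, nothing taken from the paper itself):

    # Reported extraction costs, USD per kilogram of uranium
    new_electrochemical = 83
    physical_adsorption = 205
    older_electrochemical = 360

    print(f"vs physical adsorption:   {physical_adsorption / new_electrochemical:.1f}x cheaper")
    print(f"vs older electrochemical: {older_electrochemical / new_electrochemical:.1f}x cheaper")
    # -> roughly 2.5x and 4.3x, consistent with "less than half" and "less than a quarter"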

Scaling up the size and volume of the new devices – along with potentially stacking or connecting them together – could lead to "industrialisation of uranium extraction from seawater in the future", the researchers wrote. In a 58-hour test in 100 litres of seawater, their largest experimental array extracted more than 90 per cent of the available uranium.

One of the most successful previous demonstrations of harvesting uranium from seawater came in the 1990s, when the Japan Atomic Energy Agency extracted a kilogram of the element from the ocean using a physical adsorption method. That set a milestone that has inspired Chinese academic and industry researchers ever since.

In 2019, a Chinese state-owned nuclear company teamed up with research institutes to form the Seawater Uranium Extraction Technology Innovation Alliance. This organisation aims to build a demonstration plant by 2035 and achieve ongoing industrial production by 2050, according to the South China Morning Post.

“From an engineering perspective, there is still a long way to go before implementing this method and any electrochemical-based method for large-scale uranium extraction from seawater,” says Ma.

Half of the nuclear reactor projects currently under construction are in China. The country is on track to surpass the US and the European Union in total installed nuclear power capacity by 2030, according to the International Energy Agency.

But China’s nuclear industry also imports most of the uranium that it uses. So any uranium it can economically extract from seawater will be more than welcome.

Journal reference

Nature Sustainability DOI: 10.1038/s41893-025-01567-z


Original Submission

Processed by kolie